Most firms using generative AI experience related security incidents – even as it empowers security teams
Almost all organizations using generative AI experience security issues or data breaches linked to the technology — and most say they don’t have the budget to deal with it.
Research by the Capgemini Research Institute found that 97% of organizations using generative AI had been affected by data breaches or security concerns linked to the technology.
Over half (52%) pointed to direct and indirect losses of at least $50 million arising from these incidents. As a result, more than six in ten (62%) said they needed to increase their budgets to mitigate the risks.
Capgemini attributed the security issues to increasingly sophisticated attacks from a wider range of adversaries, an expanding cyberattack surface, and vulnerabilities across all aspects of custom generative AI solutions.
At the same time, the misuse of AI and generative AI by employees can significantly increase the risk of data leakage. Two in three organizations said they were worried about data poisoning and the possible leakage of sensitive information via the datasets used to train generative AI models.
AI can also introduce risks through the material it produces, including hallucinations and the generation of biased, harmful, or inappropriate content. Indeed, 43% of respondents said they had suffered financial losses through the use of deepfakes.
Security benefits
It’s not all bad news. AI is being used to rapidly analyze vast amounts of data, identify patterns, and predict potential breaches, strengthening security across company data, applications, and the cloud.
More than six in ten respondents (64%) reported a reduction of at least 5% in their time-to-detect, and nearly 40% said their remediation time fell by 5% or more after implementing AI in their security operations centers (SOCs).
Three in five (60%) of all organizations surveyed said AI was essential to effective threat response, enabling them to implement proactive security strategies against increasingly sophisticated threat actors.
The same proportion expects generative AI to strengthen proactive defense strategies in the long term through faster threat detection, while more than half (58%) believe the technology will let cybersecurity analysts concentrate more on strategies for combating complex threats.
Double-edged sword
Marco Pereira, global head of cybersecurity, cloud, and infrastructure services at Capgemini, said the research suggested generative AI was proving to be a double-edged sword for companies.
“While it introduces unprecedented risks, organizations are increasingly relying on AI for faster and more accurate detection of cyber incidents,” he said.
“AI and generative AI provide security teams with powerful new tools to mitigate these incidents and transform their defense strategies,” added Pereira. “To ensure they represent a net advantage in the face of evolving threat sophistication, organizations must maintain and prioritize continuous monitoring of the security landscape, build the necessary data management infrastructure, frameworks, and ethical guidelines for AI adoption, and establish robust employee training and awareness programs.”
Earlier this year, Microsoft researchers found that UK organizations using AI tools for cybersecurity were twice as resilient to attacks as those that didn't use them, while those deploying AI-enhanced defenses were able to cut the costs associated with a successful attack by 20%.
And, the research concluded, boosting the use of AI in cybersecurity could save the UK economy £52 billion annually, a sizeable share of the £87 billion that cyberattacks currently cost domestic businesses.